Gaussian approximation of the empirical process under random entropy conditions

Authors

Abstract

Similar articles

The Rate of Entropy for Gaussian Processes

In this paper, we show that the Tsallis entropy rate for stochastic processes can be obtained as the limit of conditional entropy, as was done for the Shannon and Rényi entropy rates. Using this, we obtain the Tsallis entropy rate for stationary Gaussian processes. Finally, we derive the relation between the Rényi, Shannon and Tsallis entropy rates for stationary Gaussian proc...
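
For context, a minimal sketch of the quantities involved, using the standard order-$q$ Tsallis entropy (notation assumed here, not quoted from the paper): for a density $f_X$,
$$S_q(X) = \frac{1}{q-1}\left(1 - \int f_X(x)^q \, dx\right), \qquad q > 0,\ q \neq 1,$$
which recovers Shannon entropy as $q \to 1$; the conditional-entropy construction described in the abstract then defines the rate as
$$\mathcal{S}_q = \lim_{n \to \infty} S_q\!\left(X_n \mid X_{n-1}, \dots, X_1\right),$$
in parallel with the limits that define the Shannon and Rényi rates.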

Modified empirical CLT’s under only pre-Gaussian conditions

We show that a modified empirical process converges to the limiting Gaussian process whenever the limit is continuous. The modification depends on the properties of the limit via Talagrand's characterization of the continuity of Gaussian processes.
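
As a brief reminder of the objects involved (standard empirical-process notation, assumed rather than taken from the paper): for i.i.d. observations $X_1, \dots, X_n$ with common law $P$ and a class of functions $\mathcal{F}$, the empirical process is
$$\alpha_n(f) = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} f(X_i) - Pf\right), \qquad f \in \mathcal{F},$$
and the pre-Gaussian limit is the centered Gaussian process $G_P$ with covariance $\mathrm{Cov}\big(G_P f, G_P g\big) = P(fg) - (Pf)(Pg)$; the modification discussed in the abstract is chosen so that convergence to $G_P$ holds whenever $G_P$ admits a version with continuous sample paths.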

ESTIMATING THE MEAN OF INVERSE GAUSSIAN DISTRIBUTION WITH KNOWN COEFFICIENT OF VARIATION UNDER ENTROPY LOSS

An estimation problem of the mean µ of an inverse Gaussian distribution IG(µ, cµ) with known coefficient of variation c is treated as a decision problem with an entropy loss function. A class of Bayes estimators is constructed and shown to include the MRSE estimator as its closure. Two important members of this class can easily be computed using continued fractions.
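
For reference, one common form of the entropy (Stein-type) loss used in this kind of decision problem is, in a hedged reading of the abstract,
$$L(\mu, \delta) = \frac{\delta}{\mu} - \log\frac{\delta}{\mu} - 1,$$
which is nonnegative and vanishes only at $\delta = \mu$; under a loss of this form the Bayes rule minimizes the posterior expectation of $L(\mu, \delta)$ in $\delta$, giving $\delta = \big(E[\mu^{-1} \mid \text{data}]\big)^{-1}$.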

Gaussian Process Random Fields

Gaussian processes have been successful in both supervised and unsupervised machine learning tasks, but their computational complexity has constrained practical applications. We introduce a new approximation for large-scale Gaussian processes, the Gaussian Process Random Field (GPRF), in which local GPs are coupled via pairwise potentials. The GPRF likelihood is a simple, tractable, and paralle...
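
A hedged sketch of the pairwise-potential construction the abstract refers to (block and edge-set notation assumed here, not quoted from the paper): partition the observations into local blocks $\mathbf{y}_1, \dots, \mathbf{y}_M$, choose an edge set $E$ over neighbouring blocks, and approximate the marginal likelihood by
$$\log p(\mathbf{y}) \;\approx\; \sum_{i=1}^{M} \log p(\mathbf{y}_i) \;+\; \sum_{(i,j) \in E} \Big[\log p(\mathbf{y}_i, \mathbf{y}_j) - \log p(\mathbf{y}_i) - \log p(\mathbf{y}_j)\Big],$$
where every term is the marginal likelihood of a small local GP, so the terms can be evaluated and differentiated independently and in parallel.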

Hierarchically-partitioned Gaussian Process Approximation

The Gaussian process (GP) is a simple yet powerful probabilistic framework for various machine learning tasks. However, exact algorithms for learning and prediction are prohibitively expensive on large datasets due to their inherent computational complexity. To overcome this main limitation, various techniques have been proposed, in particular local GP algorithms that scale "truly linearly" w...
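
The scaling claim can be made concrete with a back-of-the-envelope count (the block-size notation is an assumption, not taken from the paper): exact GP inference costs $O(n^3)$ time and $O(n^2)$ memory because it factorizes the full $n \times n$ kernel matrix, whereas splitting the data into $M = n/m$ local blocks of size $m$ and fitting independent local GPs costs
$$M \cdot O(m^3) = O(n m^2) \ \text{time}, \qquad M \cdot O(m^2) = O(n m) \ \text{memory},$$
which is linear in $n$ for a fixed block size $m$; hierarchical partitioning, as in the abstract, is one way of organizing such local blocks.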

Journal

Journal title: Stochastic Processes and their Applications

Year: 2009

ISSN: 0304-4149

DOI: 10.1016/j.spa.2008.08.001